    Hydration of arsenic oxyacid species

    The bond distances in hydrated arsenic oxyacid species in aqueous solution have been studied by EXAFS spectroscopy and large angle X-ray scattering, LAXS. These results have been compared to structures in the solid state, as found in an extensive survey of available crystal structures. Protonated oxygen atoms can be distinguished by a longer As-O distance for both arsenic(V) and arsenic(III) species in the crystalline state. However, the average As-O distance for the HnAsO4(3-n)- species (0 ≤ n ≤ 3) remains the same. These average values are slightly shorter, by ca. 0.02 Å, than in aqueous solution, a difference attributable to hydration, as determined by EXAFS and LAXS. The K absorption edges for arsenic(V) and arsenic(III) species are separated by 4.0 eV, and the shape of the absorption edges differs as well. Small but significant differences in the absorption edge features are seen between the neutral acids and the charged oxyacid species. The most important arsenic species from an environmental point of view is arsenous acid, As(OH)3. We have also used orthotelluric acid, Te(OH)6, for comparison with arsenous acid and for detailed studies of the hydration of covalently bound hydroxo groups. Arsenous acid cannot be studied with the same accuracy as orthotelluric acid, owing to the relatively low solubility of As2O3(s) in neutral to acidic aqueous solution. The results from the double difference infrared (DDIR) studies support the assignment of As(OH)3 as a weak structure maker analogous to Te(OH)6, both being neutral weak oxyacids.

    SpotiBot : Turing Testing Spotify

    Even though digitized and born-digital audiovisual material today amounts to a steadily increasing body of data to work with and research, such media modalities are still relatively poorly represented in the field of DH. Streaming media is a case in point, and the purpose of this article is to provide some findings from an ongoing audio (and music) research project that deals with experiments, interventions and the reverse engineering of Spotify’s algorithms, aggregation procedures, and valuation strategies. One such research experiment, the SpotiBot intervention, was set up at Humlab, Umeå University. Via multiple bots running in parallel, our idea was to examine whether it is possible to provoke, or even undermine, the Spotify business model (based on the so-called “30 second royalty rule”). Essentially, the experiment resembled a Turing test, where we asked ourselves what happens when, not if, streaming bots approximate human listener behavior in such a way that it becomes impossible to distinguish between a human and a machine. Implemented in the Python programming language, and using a web UI testing framework, our so-called SpotiBot engine automated the Spotify web client by simulating user interaction within the web interface. The SpotiBot engine was instructed to play a single track repeatedly (both self-produced music and Abba’s “Dancing Queen”), for both less and more than 30 seconds, and with a fixed repetition scheme running from 100 to n times (simultaneously, with different Spotify Free ‘bot accounts’). Our bots also logged all results. In short, our bots demonstrated the ability (at least sometimes) to continuously play tracks, indicating that the Spotify business model can be tampered with. Using a single virtual machine, hidden behind only one proxy IP, the results of the intervention hence indicate that it is possible to automatically play tracks for thousands of repetitions that exceed the royalty rule.
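    As a rough illustration of how such an engine can be put together, the sketch below drives a browser with Selenium, a common web UI testing framework for Python. The article does not name the framework it used, and the track URL, CSS selector, account handling, and timings here are hypothetical placeholders rather than SpotiBot’s actual code.

        import time
        from selenium import webdriver
        from selenium.webdriver.common.by import By

        TRACK_URL = "https://open.spotify.com/track/<track-id>"  # hypothetical placeholder
        PLAY_SECONDS = 31   # just past the "30 second royalty rule"
        REPETITIONS = 100   # lower end of the fixed repetition scheme (100 to n)

        def run_bot(log_path="spotibot.log"):
            # One bot per browser session; several such processes can run in parallel.
            driver = webdriver.Firefox()
            try:
                driver.get(TRACK_URL)
                # (Logging in with a Spotify Free 'bot account' would happen here.)
                for i in range(REPETITIONS):
                    # Hypothetical selector for the web client's play button.
                    driver.find_element(By.CSS_SELECTOR, "[data-testid='play-button']").click()
                    time.sleep(PLAY_SECONDS)  # listen past the royalty threshold
                    driver.refresh()          # reload so the track can be replayed
                    with open(log_path, "a") as log:
                        log.write(f"play {i + 1}/{REPETITIONS} completed\n")
            finally:
                driver.quit()

        if __name__ == "__main__":
            run_bot()

    Running several such processes in parallel, each under its own account, and varying PLAY_SECONDS above and below the 30-second threshold would reproduce the shape of the experiment described above.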

    Countering corruption

    To curate processes within the digital humanities, an elaborate infrastructure of technology, supporting processes, physical spaces, competences, and human attitudes is needed, which can be challenging to create and sustain in humanities academia. In this poster we share our experiences, good as well as bad, and how we have tackled the challenges of working within the digital humanities. The physical spaces of HUMlab are open and accessible, where technicians, students, and researchers from a wide variety of fields can meet and collaborate. The spaces in HUMlab have been designed with the aim of creating an appealing and attractive ‘meeting place’ with a technological infrastructure that breaks interdisciplinary barriers. The co-design of digital research methodologies and tools also functions across the disciplines and joins knowledge from different fields. The supporting processes, and the way they are executed, emphasise collaboration, knowledge sharing, and joint effort. The project model used in software development is based on an agile approach that has been adapted to the special needs and demands of academia and research within the humanities. Supporting workflows have been specified and implemented (e.g., stakeholder discussion, project initialization) with tollgates and templates. The real challenge is to create formalized workflows that promote new ideas, quality, creativity, innovation, and individual development. An open mindset is required to achieve and sustain interdisciplinarity and collaboration on equal terms. The working process must allow mistakes and encourage new ideas. Part of the challenge is to build trust and share knowledge in a dialogue that translates scholarly needs into technology to give added value. Technology plays an important part in HUMlab (e.g., a multitude of screenscapes), but even more important is the critical attitude towards the technology and how it is used. It is vital to understand the underlying epistemology of different technologies, methods, and tools, and to be transparent about how they are applied in order to achieve certain (research) objectives. A real challenge is to sustain the numerous competences needed within the fields of digital humanities and humanities computing (especially when you don’t know the needs of the next collaboration). At HUMlab, this is done through so-called pet projects (freedom to work with personal projects), focus projects (small projects to expand knowledge in certain areas, and to step out of your ‘comfort zone’), assigned fields of interest (personal responsibility to sustain knowledge in a specific field), and a competence matrix at an organizational level that is based on HUMlab’s needs and vision for the future, but is also dynamic, flexible, and adaptive to an ever-changing world.

    The intricate details of using research databases and repositories for environmental archaeology data

    Environmental archaeology is a complex mix of empirical analysis and qualitative interpretation. It is increasingly data-science oriented, and databases and online resources are becoming increasingly important in large-scale synthesis research on changes in climate, environments and human activities. Research funders, journals and universities place much emphasis on the use of data repositories to ensure transparency and reusability in the research process. Although these are important, researchers themselves may have more use for research databases which are oriented more towards advanced querying and exploratory data analysis than conforming to archiving standards. This paper explores the pros and cons of these different approaches. It also discusses and problematizes some key concepts in research data management, including the definitions of data and metadata, along with the FAIR principles. Research examples are provided from a broad field of environmental archaeology and palaeoecology. In contrast to most publications, the developer’s perspective is also included, and a worked example using the Strategic Environmental Archaeology Database (SEAD) to investigate fossil beetle data demonstrates the implementation of some of this in the real world. This example may be followed online using the SEAD browser, and all described data downloaded from there. After providing both encouragement and warnings on the use of digital resources for synthesis research, some suggestions are made for moving forward.
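    To give a sense of the kind of exploratory analysis described, the sketch below summarizes fossil beetle records downloaded from the SEAD browser. The file name and column names are hypothetical and would need adjusting to the actual export format.

        import pandas as pd

        # Hypothetical CSV export of fossil beetle records from the SEAD browser.
        records = pd.read_csv("sead_beetle_export.csv")

        # Count distinct identified taxa per site for a first overview of the material.
        taxa_per_site = (records
                         .groupby("site_name")["taxon"]
                         .nunique()
                         .sort_values(ascending=False))
        print(taxa_per_site.head(10))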

    Genetic variation and gene expression across multiple tissues and developmental stages in a nonhuman primate

    By analyzing multitissue gene expression and genome-wide genetic variation data in samples from a vervet monkey pedigree, we generated a transcriptome resource and produced the first catalog of expression quantitative trait loci (eQTLs) in a nonhuman primate model. This catalog contains more genome-wide significant eQTLs per sample than comparable human resources and identifies sex- and age-related expression patterns. Findings include a master regulatory locus that likely has a role in immune function and a locus regulating hippocampal long noncoding RNAs (lncRNAs), whose expression correlates with hippocampal volume. This resource will facilitate genetic investigation of quantitative traits, including brain and behavioral phenotypes relevant to neuropsychiatric disorders.
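    To make the notion of an eQTL concrete, the sketch below runs the simplest form of the underlying test: regressing one gene’s expression on genotype dosage at one variant. The data are simulated purely for illustration; real pipelines such as this study’s add covariates (sex, age, tissue), account for pedigree relatedness, and correct for genome-wide multiple testing.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        n = 200
        genotype = rng.integers(0, 3, size=n)             # allele dosage (0, 1, or 2)
        expression = 0.5 * genotype + rng.normal(size=n)  # simulated expression with a true effect

        # A genome-wide significant eQTL is a (variant, gene) pair for which
        # this association survives multiple-testing correction.
        slope, intercept, r, p_value, stderr = stats.linregress(genotype, expression)
        print(f"effect size = {slope:.3f}, p = {p_value:.2e}")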